Risks & Responses to Cloud-Based Storage Misconfigurations
Misconfigurations remain one of the most common risks in the technology world. Simply telling organisations to “fix” this problem, however, is easier said than done, because modern infrastructure deployments involve a myriad of technologies, and each system demands its own hardening approach.
What is key, then, is to identify where hardening is required and then consider the methodology for each area. Even something as simple as data storage requires detailed planning to ensure that security controls provide robust protection not just on Day One but throughout the data’s lifetime, regardless of where it resides.
Understanding the Cloud’s Security Risks
For starters, it’s important to recognise that both private and public (cloud-hosted) networks are susceptible to misconfiguration risks. For data stored in the cloud, we continue to see inappropriate access controls applied to online storage, resulting in leaked data, as well as organisations storing credentials in insecure ways.
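As an illustration of what “inappropriate access controls” can look like in practice, the sketch below flags storage bucket ACL grants made to public groups. The grant structure loosely mirrors the shape returned by AWS’s S3 `GetBucketAcl` API, but the function and the sample data are hypothetical:

```python
# Hypothetical sketch: flag storage buckets whose ACL grants access to
# "all users" style groups. Grantee URIs follow the AWS S3 convention;
# other providers expose similar concepts under different names.

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def find_public_grants(acl: dict) -> list[str]:
    """Return the permissions an ACL grants to public groups."""
    exposed = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
            exposed.append(grant.get("Permission", "UNKNOWN"))
    return exposed

sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}

print(find_public_grants(sample_acl))  # ['READ']
```

A check like this only covers one misconfiguration pattern; real auditing would also need to inspect bucket policies and account-wide public-access settings.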
Unfortunately, these problems are not unique to the online world. Incorrect permissions are an easy way for insider threats to become more costly by exposing more data. And storage of credentials in insecure documents or scripts remains a common way for outsiders to find new ways to expand their access. Ultimately, many of the same risks exist for data regardless of where it is.
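The credential-storage problem above is one of the easier ones to scan for automatically. The sketch below uses a few illustrative regular expressions to spot secrets embedded in scripts or config files; dedicated scanners use far larger rule sets, and these patterns are examples only:

```python
import re

# Illustrative patterns for spotting hard-coded credentials in text.
# Real secret scanners maintain much larger, curated rule sets.
CREDENTIAL_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*['\"]?[^\s'\"]+"),
    re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*['\"]?[^\s'\"]+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),  # AWS access key ID format
]

def scan_for_credentials(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that look like embedded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

script = "HOST=db.internal\npassword = 'hunter2'\nretries = 3\n"
for lineno, line in scan_for_credentials(script):
    print(f"line {lineno}: {line}")
```

Pattern-based scanning produces false positives, so findings like these are best treated as leads for review rather than automatic blockers.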
There have been improvements in this area in recent years: “security by default” is now a common approach among IaaS providers. But these defaults are not fool-proof and are sometimes simply overridden, which further complicates things. And for those still deploying physical or virtual machines in their own data centre, templated server builds often lack default hardening, meaning each new machine requires additional security effort every time it is provisioned.
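One way to stop templated builds shipping unhardened is to compare each new machine’s settings against an explicit baseline before it goes live. The sketch below shows the idea; the baseline keys and values are illustrative, not drawn from any real standard:

```python
# Hypothetical sketch: compare a freshly provisioned machine's settings
# against a hardening baseline so gaps surface before the host goes live.
# These baseline keys and values are illustrative only.

BASELINE = {
    "ssh_password_auth": False,
    "disk_encryption": True,
    "firewall_enabled": True,
    "unused_services_removed": True,
}

def hardening_gaps(actual: dict) -> dict:
    """Return the baseline settings the machine fails to meet (or omits)."""
    return {
        key: expected
        for key, expected in BASELINE.items()
        if actual.get(key) != expected
    }

new_vm = {"ssh_password_auth": True,
          "disk_encryption": True,
          "firewall_enabled": True}
print(hardening_gaps(new_vm))
# {'ssh_password_auth': False, 'unused_services_removed': True}
```

Better still is to bake the baseline into the template itself, so the check becomes a safety net rather than the primary control.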
How to Respond to These Risks
Preventing these issues in the first place is therefore incredibly important. If thieves know you leave the vault door open all the time, you’re going to be facing a constant barrage of new attackers exploiting your error. But once you’ve closed the door, ensuring it stays that way remains just as important. A one-off security incident might result in you fixing your exposed data stores, but it’s only through regularly auditing the protection you put in place (and ensuring no new weaknesses creep into the system) that you can stay secure.
Automated checking therefore remains key. You cannot rely on a once-a-year check when it takes attackers only moments to extract vital data or plant spyware that lets them monitor and then circumvent any other security countermeasures you put in place. Automation should also ideally be smart enough to account for new configurations automatically. There’s little point checking the same set of servers or storage services when your organisation has expanded beyond them, leaving new areas of infrastructure unprotected.
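The “account for new configurations” requirement boils down to reconciling the audit scope against a live inventory on every cycle. A minimal sketch of that reconciliation, assuming a hypothetical discovery step that returns the current set of storage resources:

```python
# Hypothetical sketch: each audit cycle reconciles the previously known
# resource set against a freshly discovered inventory, so newly created
# storage is enrolled rather than silently skipped.

def audit_cycle(known: set[str], discovered: set[str]) -> dict[str, set[str]]:
    """Split the current inventory into audit actions."""
    return {
        "recheck": known & discovered,              # still present: re-audit
        "enroll": discovered - known,               # new: add to audit scope
        "investigate_missing": known - discovered,  # vanished: confirm why
    }

known = {"backups", "logs"}
discovered = {"backups", "logs", "analytics-exports"}
result = audit_cycle(known, discovered)
print(result["enroll"])  # {'analytics-exports'}
```

Resources that disappear from the inventory deserve as much attention as new ones, since a deleted bucket may indicate decommissioning — or tampering.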
Once you have automation in place, there is still a final piece in the puzzle – ensuring the automation itself is working. Countless organisations have been caught out by an out-of-date anti-virus pattern or an inherited access control list that they had assumed was automatically managed. When things go wrong, alerts need to be generated and responded to promptly, and for that there needs to be a standard process so the most common scenarios can be handled quickly and easily.
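Watching the watchers can be as simple as tracking when each check last ran successfully and alerting when one goes quiet. The sketch below illustrates the idea; the check names and the 24-hour threshold are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: treat the audit tooling itself as something to
# monitor. If a check hasn't reported success recently, raise it for
# attention rather than assuming it is still running.

MAX_AGE = timedelta(hours=24)  # illustrative threshold

def stale_checks(last_success: dict[str, datetime], now: datetime) -> list[str]:
    """Return names of checks whose last successful run is too old."""
    return [name for name, ran in last_success.items() if now - ran > MAX_AGE]

now = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
last_success = {
    "av_pattern_update": now - timedelta(hours=30),
    "acl_audit": now - timedelta(hours=2),
}
print(stale_checks(last_success, now))  # ['av_pattern_update']
```

The names returned here would feed whatever alerting channel your standard response process uses, so a silent failure becomes a ticket rather than a surprise.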
With all of this in mind, your data protection model must be flexible enough to cover multiple system types. Integrating your authentication infrastructure and access request systems into a unified front- and back-end for establishing ownership of, and access to, data can greatly simplify things here, reducing friction for end users and security administrators alike. Once established, such a system also makes automation and remediation strategies far easier to carry out.
Security within Reach
A surprising number of organisations today don’t know where all their data is, much less have a unified model for managing access to it or a process to audit and rectify issues. But all of these controls are within reach for most organisations. So what are you waiting for?